Deep learning-based modularized loading protocol for parameter estimation of Bouc-Wen class models
Oh, Sebin, Song, Junho, Kim, Taeyong
This study proposes a modularized deep learning-based loading protocol for optimal parameter estimation of Bouc-Wen (BW) class models. The protocol consists of two key components: optimal loading history construction and CNN-based rapid parameter estimation. Each component is decomposed into independent sub-modules tailored to distinct hysteretic behaviors (basic hysteresis, structural degradation, and pinching effect), making the protocol adaptable to diverse hysteresis models. Three independent CNN architectures are developed to capture the path-dependent nature of these hysteretic behaviors. By training these CNN architectures on diverse loading histories, minimal loading sequences, termed 'loading history modules', are identified and then combined to construct an optimal loading history. The three CNN models, trained on the respective loading history modules, serve as rapid parameter estimators. Numerical evaluation of the protocol, including nonlinear time history analysis of a 3-story steel moment frame and fragility curve construction for a 3-story reinforced concrete frame, demonstrates that the proposed protocol significantly reduces total analysis time while maintaining or improving estimation accuracy. The proposed protocol can be extended to other hysteresis models, suggesting a systematic approach for identifying general hysteresis models.
- North America > United States > California > Alameda County > Berkeley (0.14)
- Asia > South Korea > Seoul > Seoul (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- (7 more...)
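Bouc-Wen class models, as referenced in the abstract above, evolve a hysteretic variable z alongside the displacement x; the restoring force mixes an elastic share and a hysteretic share. A minimal Python sketch of one explicit-Euler step of the basic (non-degrading, non-pinching) variant follows; the function names and default parameter values are illustrative assumptions, not taken from the paper:

```python
def bouc_wen_step(z, dx, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    # Incremental form of the basic Bouc-Wen evolution equation:
    # dz = A*dx - beta*|dx|*|z|^(n-1)*z - gamma*dx*|z|^n
    dz = A * dx - beta * abs(dx) * abs(z) ** (n - 1) * z - gamma * dx * abs(z) ** n
    return z + dz

def restoring_force(x, z, k=1.0, alpha=0.1):
    # F = alpha*k*x + (1 - alpha)*k*z  (elastic share + hysteretic share)
    return alpha * k * x + (1.0 - alpha) * k * z
```

Under monotonic loading the sketch drives z toward the limit (A/(beta+gamma))^(1/n), which is the saturation behavior degradation and pinching sub-modules then modify.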
How AI-powered tech landed man in jail with scant evidence
Michael Williams' wife pleaded with him to remember their fishing trips with the grandchildren, how he used to braid her hair, anything to jar him back to his world outside the concrete walls of Cook County Jail. His three daily calls to her had become a lifeline, but when they dwindled to two, then one, then only a few a week, the 65-year-old Williams felt he couldn't go on. He made plans to take his life with a stash of pills he had stockpiled in his dormitory. Williams was jailed last August, accused of killing a young man from the neighborhood who asked him for a ride during a night of unrest over police brutality in May. But the key evidence against Williams didn't come from an eyewitness or an informant; it came from a clip of noiseless security video showing a car driving through an intersection, and a loud bang picked up by a network of surveillance microphones. Prosecutors said technology powered by a secret algorithm that analyzed noises detected by the sensors indicated Williams shot and killed the man. "I kept trying to figure out, how can they get away with using the technology like that against me?" said Williams, speaking publicly for the first time about his ordeal. Williams sat behind bars for nearly a year before a judge dismissed the case against him last month at the request of prosecutors, who said they had insufficient evidence.
- North America > United States > Illinois > Cook County > Chicago (0.07)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- South America (0.04)
- (15 more...)
Automated Lane Change Strategy using Proximal Policy Optimization-based Deep Reinforcement Learning
Ye, Fei, Cheng, Xuxin, Wang, Pin, Chan, Ching-Yao
Lane-change maneuvers are commonly executed by drivers to follow a certain routing plan, overtake a slower vehicle, adapt to a merging lane ahead, etc. However, improper lane change behaviors can be a major cause of traffic flow disruptions and even crashes. While many rule-based methods have been proposed to solve lane change problems for autonomous driving, they tend to exhibit limited performance due to the uncertainty and complexity of the driving environment. Machine learning-based methods offer an alternative approach, as deep reinforcement learning (DRL) has shown promising success in many application domains including robotic manipulation, navigation, and playing video games. However, applying DRL for autonomous driving still faces many practical challenges in terms of slow learning rates, sample inefficiency, and non-stationary trajectories. In this study, we propose an automated lane change strategy using proximal policy optimization-based deep reinforcement learning, which shows a clear advantage in learning efficiency while maintaining stable performance. The trained agent is able to learn a smooth, safe, and efficient driving policy to determine lane-change decisions (i.e. when and how) even in dense traffic scenarios. The effectiveness of the proposed policy is validated using task success rate and collision rate, which demonstrates that lane change maneuvers can be learned and executed in a safe, smooth, and efficient manner.
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > California > Contra Costa County > Richmond (0.04)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Asia > Middle East > Jordan (0.04)
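The learning stability that the abstract above attributes to PPO comes from its clipped surrogate objective, which bounds how far the probability ratio between new and old policies can move the update. A minimal single-sample sketch (the function name is an assumption; eps=0.2 is the value commonly used with PPO):

```python
def ppo_clip_loss(ratio, advantage, eps=0.2):
    # PPO clipped surrogate, as a loss to minimize:
    # L = -min(ratio * A, clip(ratio, 1-eps, 1+eps) * A)
    clipped = min(max(ratio, 1.0 - eps), 1.0 + eps)
    return -min(ratio * advantage, clipped * advantage)
```

When the ratio drifts outside [1-eps, 1+eps] in the direction the advantage rewards, the clipped term takes over and the gradient incentive vanishes, which is what keeps updates conservative.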
Adversarial Inverse Reinforcement Learning for Decision Making in Autonomous Driving
Wang, Pin, Liu, Dapeng, Chen, Jiayu, Chan, Ching-Yao
Generative Adversarial Imitation Learning (GAIL) is an efficient way to learn sequential control strategies from demonstration. Adversarial Inverse Reinforcement Learning (AIRL) is similar to GAIL but also learns a reward function at the same time and has better training stability. In previous work, however, AIRL has mostly been demonstrated on robotic control in artificial environments. In this paper, we apply AIRL to a practical and challenging problem, the decision-making in autonomous driving, and also augment AIRL with a semantic reward to improve its performance. We use four metrics to evaluate its learning performance in a simulated driving environment. Results show that the vehicle agent can learn decent decision-making behaviors from scratch and can reach a level of performance comparable with that of an expert. Additionally, the comparison with GAIL shows that AIRL converges faster and achieves better and more stable performance than GAIL.
- North America > United States > California > Contra Costa County > Richmond (0.04)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Europe > Sweden > Vaestra Goetaland > Gothenburg (0.04)
- (2 more...)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks (1.00)
- Information Technology > Robotics & Automation (0.91)
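The reward-learning property that the abstract above credits to AIRL comes from its discriminator structure, which couples a learned function f with the policy's action probability; the reward is then recovered from the discriminator's log-odds. A minimal scalar sketch, with function and argument names chosen for illustration:

```python
import math

def airl_discriminator(f_value, policy_prob):
    # AIRL-style discriminator: D = exp(f) / (exp(f) + pi(a|s))
    ef = math.exp(f_value)
    return ef / (ef + policy_prob)

def airl_reward(f_value, policy_prob):
    # Reward recovered from the discriminator's log-odds:
    # r = log D - log(1 - D), which reduces to f - log pi(a|s)
    d = airl_discriminator(f_value, policy_prob)
    return math.log(d) - math.log(1.0 - d)
```

The algebraic reduction r = f - log pi(a|s) is what lets AIRL hand back an explicit reward function, which GAIL's plain discriminator does not provide.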
Continuous Control for Automated Lane Change Behavior Based on Deep Deterministic Policy Gradient Algorithm
Wang, Pin, Li, Hanhan, Chan, Ching-Yao
Lane change is a challenging task which requires delicate actions to ensure safety and comfort. Some recent studies have attempted to solve the lane-change control problem with Reinforcement Learning (RL), yet the action is confined to discrete action space. To overcome this limitation, we formulate the lane change behavior with continuous action in a model-free dynamic driving environment based on Deep Deterministic Policy Gradient (DDPG). The reward function, which is critical for learning the optimal policy, is defined by control values, position deviation status, and maneuvering time to provide the RL agent informative signals. The RL agent is trained from scratch without resorting to any prior knowledge of the environment and vehicle dynamics since they are not easy to obtain. Seven models under different hyperparameter settings are compared. A video showing the learning progress of the driving behavior is available. It shows that the RL vehicle agent initially runs off the road boundary frequently but eventually learns to change to the target lane smoothly and stably, reaching a success rate of 100% under diverse driving situations in simulation.
- North America > United States > California > Contra Costa County > Richmond (0.14)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Asia > Middle East > Jordan (0.04)
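The abstract above describes a reward built from control values, position deviation, and maneuvering time, alongside the target-network machinery standard in DDPG. A hedged sketch of both pieces; the weight values are entirely hypothetical and the Polyak rate tau=0.001 is only the default commonly used with DDPG:

```python
def lane_change_reward(lateral_dev, steer_cmd, elapsed,
                       w_dev=1.0, w_ctrl=0.1, w_time=0.01):
    # Penalize position deviation, control effort, and maneuvering time.
    # Weights are illustrative assumptions, not taken from the paper.
    return -(w_dev * lateral_dev ** 2 + w_ctrl * steer_cmd ** 2 + w_time * elapsed)

def soft_update(target, online, tau=0.001):
    # DDPG-style Polyak averaging of target-network weights:
    # theta_target <- tau * theta_online + (1 - tau) * theta_target
    return [tau * w + (1.0 - tau) * t for t, w in zip(target, online)]
```

The quadratic penalties give a smooth gradient toward the lane center, and the small time penalty discourages the agent from stalling mid-maneuver.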
Driving Decision and Control for Autonomous Lane Change based on Deep Reinforcement Learning
Shi, Tianyu, Wang, Pin, Cheng, Xuxin, Chan, Ching-Yao
We apply a Deep Q-network (DQN) that incorporates safety considerations to decide whether to conduct the lane-change maneuver. Furthermore, we design two similar Deep Q-learning frameworks with a quadratic approximator to decide how to select a comfortable gap or simply follow the preceding vehicle. Finally, a polynomial lane change trajectory is generated and Pure Pursuit Control is implemented for path tracking. We demonstrate the effectiveness of this framework in simulation, at both the decision-making and control layers. The proposed architecture also has the potential to be extended to other autonomous driving scenarios.
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > Virginia (0.04)
- North America > United States > California > Contra Costa County > Richmond (0.04)
- North America > United States > California > Alameda County > Berkeley (0.04)
- Automobiles & Trucks (0.89)
- Transportation > Ground > Road (0.49)
- Information Technology > Robotics & Automation (0.35)
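The DQN decision layer described in the abstract above learns by regressing toward a Bellman target bootstrapped from the next state's best action value. A minimal sketch of that target computation (the function name and gamma default are assumptions, not from the paper):

```python
def dqn_target(reward, next_q_values, done, gamma=0.99):
    # Bellman target y = r + gamma * max_a' Q(s', a');
    # no bootstrap term at terminal states.
    if done:
        return reward
    return reward + gamma * max(next_q_values)
```

Zeroing the bootstrap at terminal states matters in a lane-change setting: a collision must not inherit value from states that come after it.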
Learning Spatiotemporal Features of Ride-sourcing Services with Fusion Convolutional Network
Zhang, Dapeng, Xiao, Feng, Li, Lu, Kou, Gang
In order to collectively forecast the demand for ride-sourcing services in all regions of a city, convolutional neural networks (CNNs) have been applied with commendable results. However, local statistical differences throughout the geographical layout of the city make the spatial stationarity assumption of the convolution invalid, which limits the performance of CNNs on the demand forecasting task. Hence, we propose a novel deep learning framework called LC-ST-FCN (locally-connected spatiotemporal fully-convolutional neural network) that consists of a stack of 3D convolutional layers, 2D (standard) convolutional layers, and locally connected convolutional layers. This fully convolutional architecture maintains the spatial coordinates of the input and no spatial information is lost between layers. Features are fused across layers to define a tunable nonlinear local-to-global-to-local representation, where both global and local statistics can be learned to improve predictive performance. Furthermore, as the local statistics vary from region to region, the arithmetic-mean-based metrics frequently used in spatial stationarity situations cannot effectively evaluate the models. We propose a weighted-arithmetic approach to deal with this situation. In the experiments, a real dataset from a ride-sourcing service platform (DiDiChuxing) is used, which demonstrates the effectiveness and superiority of our proposed model and evaluation method.
- North America > Trinidad and Tobago > Trinidad > Arima > Arima (0.05)
- Asia > China > Sichuan Province > Chengdu (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- (4 more...)
- Transportation > Ground > Road (0.93)
- Transportation > Passenger (0.69)
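The weighted-arithmetic evaluation idea above replaces a plain arithmetic mean over regions with a weighted one, so that regions with very different local statistics are not averaged away. One plausible reading, weighting each region's error by its demand share, is sketched below; this is an illustration, not necessarily the paper's exact formula:

```python
def weighted_mae(errors, region_demand):
    # Weight each region's absolute forecast error by its share of
    # total demand, so busy regions dominate the score instead of
    # every region counting equally as in a plain arithmetic mean.
    total = sum(region_demand)
    return sum(abs(e) * d for e, d in zip(errors, region_demand)) / total
```

Compared with an unweighted mean, this stops a model from looking good by fitting many quiet regions while missing badly in the few regions that carry most of the demand.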
The 10 Top Robotics Investments in January 2019 Analytics Insight
Robotics investments in January 2019 totaled at least $644 million worldwide across 25 robotics transactions. The $644 million raised in January is lower than the December funding into this industry, which stood at $652.7 million. One of the biggest investments in January was a $104 million Series A into the Beijing Auto AI Technology Co. of China. Other notable investments in January 2019 into robotics include a $100 million JV into Ekso Bionics Holdings Inc. and a $59.61 million Series B funding into China-based NASN Automotive Electronics Co. Here are the top 10 investments that ruled the robotics technologies space in January 2019.
A Data Driven Method of Optimizing Feedforward Compensator for Autonomous Vehicle
Shi, Tianyu, Wang, Pin, Chan, Ching-Yao, Zou, Chonghao
A reliable controller is critical and essential for the execution of safe and smooth maneuvers of an autonomous vehicle. The controller must be robust to external disturbances, such as road surface, weather, and wind conditions. It also needs to deal with internal parametric variations of vehicle sub-systems, including power-train efficiency, measurement errors, time delay, and so on. Moreover, as in most production vehicles, the low-level control commands for the engine, brake, and steering systems are delivered through separate electronic control units. These aforementioned factors introduce opacity and ineffectiveness issues in controller performance. In this paper, we design a feedforward compensation process via a data-driven method to model and further optimize the controller performance. We apply principal component analysis to extract the most influential features. Subsequently, we adopt a time-delay neural network and evaluate the accuracy of the predicted error over a future time horizon. Utilizing the predicted error, we then design a feedforward compensation process to improve the control performance. Finally, we demonstrate the effectiveness of the proposed feedforward compensation process in simulation scenarios.
- Asia > China > Beijing > Beijing (0.04)
- North America > United States > New York (0.04)
- North America > United States > California > Contra Costa County > Richmond (0.04)
- (2 more...)
- Automobiles & Trucks (1.00)
- Transportation > Ground > Road (0.94)
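The time-delay neural network in the compensator above consumes lagged samples of the signal it predicts from, i.e. each training input stacks the current value with the previous few. A minimal sketch of that delayed-input construction (the function name and delay count are illustrative assumptions):

```python
def time_delay_inputs(signal, n_delays=3):
    # Build feature vectors [x_t, x_{t-1}, ..., x_{t-n_delays}]
    # for each time step that has a full history available.
    return [signal[t - n_delays:t + 1][::-1]
            for t in range(n_delays, len(signal))]
```

Each returned vector would feed one forward pass of the time-delay network, whose output is the predicted tracking error that the feedforward term then cancels.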
AI for Crime Prevention and Detection - 5 Current Applications
Companies and cities all over the world are experimenting with using artificial intelligence to reduce and prevent crime, and to respond more quickly to crimes in progress. The idea behind many of these projects is that crimes are relatively predictable; it just requires being able to sort through a massive volume of data to find patterns that are useful to law enforcement. This kind of data analysis was technologically impossible a few decades ago, but the hope is that recent developments in machine learning are up to the task. There is good reason why companies and governments are both interested in trying to use AI in this manner. As of 2010, the United States spent over $80 billion a year on incarceration at the state, local, and federal levels. Estimates put the United States' total spending on law enforcement at over $100 billion a year. Law enforcement and prisons make up a substantial percentage of local government budgets.
- North America > United States > Wisconsin (0.04)
- North America > United States > Washington > Pierce County > Tacoma (0.04)
- North America > United States > Pennsylvania (0.04)
- (16 more...)